Information Security


Amazon Explains How Its AWS Outage Took Down the Web

WIRED

Plus: The Jaguar Land Rover hack sets an expensive new record, OpenAI's new Atlas browser raises security fears, Starlink cuts off scam compounds, and more. The cloud giant Amazon Web Services experienced DNS resolution issues on Monday, leading to cascading outages that took down wide swaths of the web. Monday's meltdown illustrated the world's fundamental reliance on so-called hyperscalers like AWS and the challenges for major cloud providers and their customers alike when things go awry. See below for more about how the outage occurred. US Justice Department indictments in a mob-fueled gambling scam reverberated through the NBA on Thursday.


Information Security Based on LLM Approaches: A Review

Gong, Chang, Li, Zhongwen, Li, Xiaoqi

arXiv.org Artificial Intelligence

Information security is facing increasingly severe challenges, and traditional protection means are difficult to cope with complex and changing threats. In recent years, as an emerging intelligent technology, large language models (LLMs) have shown broad application prospects in the field of information security. In this paper, we focus on the key role of LLMs in information security, systematically review their application progress in malicious behavior prediction, network threat analysis, system vulnerability detection, malicious code identification, and cryptographic algorithm optimization, and explore their potential in enhancing security protection performance. Based on neural networks and the Transformer architecture, this paper analyzes the technical basis of large language models and their advantages in natural language processing tasks. It is shown that the introduction of large language models helps to improve the detection accuracy and reduce the false alarm rate of security systems. Finally, this paper summarizes the current application results and points out that LLM-based approaches still face challenges in model transparency, interpretability, and scenario adaptability, among other issues. Further exploration of model structure optimization and improved generalization ability is necessary to realize a more intelligent and accurate information security protection system.
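The two evaluation metrics the review highlights, detection accuracy and false alarm rate, are straightforward to compute from a classifier's predictions. A minimal sketch (the function name and label convention are illustrative, not from the paper):

```python
from typing import Dict, Sequence

def detection_metrics(y_true: Sequence[int], y_pred: Sequence[int]) -> Dict[str, float]:
    """Compute accuracy and false alarm rate for a security classifier.

    Labels: 1 = malicious, 0 = benign.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    accuracy = (tp + tn) / len(y_true)
    # False alarm rate: fraction of benign samples flagged as malicious.
    false_alarm_rate = fp / (fp + tn) if (fp + tn) else 0.0
    return {"accuracy": accuracy, "false_alarm_rate": false_alarm_rate}
```

A claimed improvement from an LLM-based detector would show up here as higher accuracy at an equal or lower false alarm rate on the same test set.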


Interplay of ISMS and AIMS in context of the EU AI Act

Pötsch, Jordan

arXiv.org Artificial Intelligence

The EU AI Act (AIA) mandates the implementation of a risk management system (RMS) and a quality management system (QMS) for high-risk AI systems. The ISO/IEC 42001 standard provides a foundation for fulfilling these requirements but does not cover all EU-specific regulatory stipulations. To enhance the implementation of the AIA in Germany, the Federal Office for Information Security (BSI) could introduce the national standard BSI 200-5, which specifies AIA requirements and integrates existing ISMS standards, such as ISO/IEC 27001. This paper examines the interfaces between an information security management system (ISMS) and an AI management system (AIMS), demonstrating that incorporating existing ISMS controls with specific AI extensions presents an effective strategy for complying with Article 15 of the AIA. Four new AI modules are introduced, proposed for inclusion in the BSI IT Grundschutz framework to comprehensively ensure the security of AI systems. Additionally, an approach for adapting BSI's qualification and certification systems is outlined to ensure that expertise in secure AI handling is continuously developed. Finally, the paper discusses how the BSI could bridge international standards and the specific requirements of the AIA through the nationalization of ISO/IEC 42001, creating synergies and bolstering the competitiveness of the German AI landscape.


Generative AI Models: Opportunities and Risks for Industry and Authorities

Alt, Tobias, Ibisch, Andrea, Meiser, Clemens, Wilhelm, Anna, Zimmer, Raphael, Berghoff, Christian, Droste, Christoph, Karschau, Jens, Laus, Friederike, Plaga, Rainer, Plesch, Carola, Sennewald, Britta, Thaeren, Thomas, Unverricht, Kristina, Waurick, Steffen

arXiv.org Artificial Intelligence

Generative AI models are capable of performing a wide range of tasks that traditionally require creativity and human understanding. They learn patterns from existing data during training and can subsequently generate new content such as texts, images, and music that follow these patterns. Due to their versatility and generally high-quality results, they, on the one hand, represent an opportunity for digitalization. On the other hand, the use of generative AI models introduces novel IT security risks that need to be considered for a comprehensive analysis of the threat landscape in relation to IT security. In response to this risk potential, companies or authorities using them should conduct an individual risk analysis before integrating generative AI into their workflows. The same applies to developers and operators, as many risks in the context of generative AI have to be taken into account at the time of development or can only be influenced by the operating company. Based on this, existing security measures can be adjusted, and additional measures can be taken.


Dangers of AI - Blog on Information Security and other technical topics

#artificialintelligence

By now, the whole world is chattering about 'ChatGPT', Bard, and other AI chatbots. AI, or 'Artificial Intelligence', is the concept powering these chatbots. Artificial intelligence, as the name suggests, is intelligence in machines that seeks to mimic human intelligence. An AI chatbot combines computer science techniques with large data sets so that it can answer questions on almost any topic, like a superhuman dictionary. One popular example of AI is the 'ChatGPT' chatbot, which made its appearance in November 2022 and was quickly adopted across the tech community.


3 Ways ChatGPT Will Change Infosec in 2023

#artificialintelligence

ChatGPT took the world by storm after OpenAI opened it for testing on Nov. 30, 2022. For an industry calloused by years of largely unsatisfying AI and machine learning "innovations," the reactions have been quite telling. Like many who are excited by its potential, I believe this is finally the moment of clarity for how truly revolutionary AI can be for information security. It's also quite sobering, as there are already countless examples of how it changes the game for black hats of all stripes. In one of the first proofs-of-concept, NYU professor Brendan Dolan-Gavitt used ChatGPT to exploit a buffer overflow vulnerability.


Network Security Modelling with Distributional Data

Majumdar, Subhabrata, Subramaniam, Ganesh

arXiv.org Artificial Intelligence

We investigate the detection of botnet command and control (C2) hosts in massive IP traffic using machine learning methods. To this end, we use NetFlow data -- the industry standard for monitoring IP traffic -- and ML models using two sets of features: conventional NetFlow variables and distributional features based on NetFlow variables. In addition to using static summaries of NetFlow features, we use quantiles of their IP-level distributions as input features in predictive models to predict whether an IP belongs to known botnet families. These models are used to develop intrusion detection systems to predict traffic traces identified with malicious attacks. The results are validated by matching predictions to existing denylists of published malicious IP addresses and deep packet inspection. The usage of our proposed novel distributional features, combined with techniques that enable modelling complex input feature spaces, results in highly accurate predictions by our trained models.
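The core distributional-feature idea can be sketched as follows: group flows by source IP and take quantiles of a per-flow NetFlow variable (byte count here) as that IP's feature vector. The field names, the choice of quantiles, and the interpolation method are illustrative assumptions, not the authors' exact pipeline:

```python
from collections import defaultdict
from typing import Dict, Iterable, List, Sequence

def _quantile(sorted_vals: Sequence[float], p: float) -> float:
    """Linear-interpolation quantile of a pre-sorted list, 0 <= p <= 1."""
    k = (len(sorted_vals) - 1) * p
    lo = int(k)
    hi = min(lo + 1, len(sorted_vals) - 1)
    return sorted_vals[lo] + (sorted_vals[hi] - sorted_vals[lo]) * (k - lo)

def quantile_features(flows: Iterable[dict],
                      probs: Sequence[float] = (0.25, 0.5, 0.75)) -> Dict[str, List[float]]:
    """Build per-IP distributional features from flow records.

    flows: iterable of dicts with 'src_ip' and a numeric 'bytes' field
    (hypothetical schema). Returns {ip: [quantiles of per-flow byte counts]},
    usable as input features for a botnet classifier.
    """
    per_ip = defaultdict(list)
    for f in flows:
        per_ip[f["src_ip"]].append(f["bytes"])
    feats = {}
    for ip, vals in per_ip.items():
        vals.sort()
        feats[ip] = [_quantile(vals, p) for p in probs]
    return feats
```

In practice one would compute such quantile vectors for several NetFlow variables (bytes, packets, duration) and concatenate them with the conventional static summaries before training.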



Intelligent Zero Trust Architecture for 5G/6G Networks: Principles, Challenges, and the Role of Machine Learning in the context of O-RAN

Ramezanpour, Keyvan, Jagannath, Jithin

arXiv.org Artificial Intelligence

In this position paper, we discuss the critical need for integrating zero trust (ZT) principles into next-generation communication networks (5G/6G). We highlight the challenges and introduce the concept of an intelligent zero trust architecture (i-ZTA) as a security framework in 5G/6G networks with untrusted components. While network virtualization, software-defined networking (SDN), and service-based architectures (SBA) are key enablers of 5G networks, operating in an untrusted environment has also become a key feature of the networks. Further, seamless connectivity to a high volume of devices has broadened the attack surface on information infrastructure. Network assurance in a dynamic untrusted environment calls for revolutionary architectures beyond existing static security frameworks. To the best of our knowledge, this is the first position paper that presents the architectural concept design of an i-ZTA upon which modern artificial intelligence (AI) algorithms can be developed to provide information security in untrusted networks. We introduce key ZT principles as real-time Monitoring of the security state of network assets, Evaluating the risk of individual access requests, and Deciding on access authorization using a dynamic trust algorithm, called MED components. To ensure ease of integration, the envisioned architecture adopts an SBA-based design, similar to the 3GPP specification of 5G networks, by leveraging the open radio access network (O-RAN) architecture with appropriate real-time engines and network interfaces for collecting necessary machine learning data. Therefore, this work provides novel research directions to design machine learning based components that contribute towards i-ZTA for the future 5G/6G networks.
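The MED components (Monitoring, Evaluating, Deciding) can be illustrated as a minimal access-control loop. The paper leaves the dynamic trust algorithm open for ML-based design, so the state fields, scoring weights, and threshold below are purely illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class AssetState:
    """Output of the Monitoring component (hypothetical fields)."""
    patch_level: float    # 0..1, fraction of required patches applied
    anomaly_score: float  # 0..1, e.g. from a traffic-anomaly detector

def evaluate_risk(state: AssetState, request_sensitivity: float) -> float:
    """Evaluating: combine the monitored security state of an asset with the
    sensitivity of an individual access request into a risk score in [0, 1].
    The equal weighting is an assumption, not from the paper."""
    exposure = (1 - state.patch_level) * 0.5 + state.anomaly_score * 0.5
    return exposure * request_sensitivity

def decide(state: AssetState, request_sensitivity: float,
           threshold: float = 0.3) -> bool:
    """Deciding: authorize the request only while the dynamic risk score
    stays below the policy threshold."""
    return evaluate_risk(state, request_sensitivity) < threshold
```

In an i-ZTA, the Monitoring input would stream from O-RAN real-time engines, and the fixed weights and threshold would be replaced by a learned, continuously updated trust model.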